27 research outputs found

    Counterexample of a Claim Pertaining to the Synthesis of a Recurrent Neural Network

    Get PDF
    Recurrent neural networks have received much attention due to their nonlinear dynamic behavior; one such behavior is settling into a fixed stable state. This paper presents a counterexample to the claim of A.N. Michel et al. (IEEE Control Systems Magazine, vol. 15, pp. 52-65, Jun. 1995) that, in the synthesis of sparsely interconnected recurrent neural networks, sparsity constraints on the interconnecting structure of a given neural network are usually expressed as constraints requiring that predetermined elements of T (a real n×n matrix acting on a real n-vector-valued function) be zero.

    A Parallel Computer-Go Player, using HDP Method

    Get PDF
    The game of Go has simple rules to learn but requires complex strategies to play well, and the conventional tree-search algorithms used in computer games are not well suited to a Go program. Thus, the game of Go is an ideal problem domain for machine learning algorithms. This paper examines the performance of a 19x19 computer Go player that uses heuristic dynamic programming (HDP) and parallel alpha-beta search. The neural-network-based Go player learns good Go evaluation functions and wins about 30% of the games in a test series on a 19x19 board.
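    The alpha-beta pruning half of such a system can be illustrated with a minimal sequential sketch (the paper's contribution is a parallel variant; this toy version only shows the cutoff logic, and the tree shape and scores below are hypothetical):

    ```python
    def alphabeta(node, alpha, beta, maximizing):
        """Minimal alpha-beta search sketch. A node is either a number
        (a leaf score) or a list of child nodes."""
        if isinstance(node, (int, float)):
            return node
        if maximizing:
            best = float("-inf")
            for child in node:
                best = max(best, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:
                    break   # beta cutoff: the opponent will avoid this line
            return best
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break       # alpha cutoff
        return best

    tree = [[3, 5], [2, 9], [0, 7]]   # depth-2 toy game tree
    value = alphabeta(tree, float("-inf"), float("inf"), True)
    # value == 3: max over the minimum of each branch = max(3, 2, 0)
    ```

    A parallel version would search independent subtrees concurrently and share the alpha/beta window between workers, which is where most of the engineering effort lies.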

    Engine Data Classification with Simultaneous Recurrent Network using a Hybrid PSO-EA Algorithm

    Get PDF
    We applied an architecture that automates the design of the simultaneous recurrent network (SRN) using a new evolutionary learning algorithm. This new algorithm is based on a hybrid of particle swarm optimization (PSO) and an evolutionary algorithm (EA). By combining the searching abilities of these two global optimization methods, the evolution of individuals is no longer restricted to a single generation, and better-performing individuals may produce offspring to replace those with poor performance. The novel algorithm is then applied to the simultaneous recurrent network for engine data classification. The experimental results show that our approach gives solid performance in categorizing the nonlinear car engine data.
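    The cross-generation idea described above can be sketched roughly as follows. This is an illustrative hybrid on a toy objective, not the authors' exact algorithm; all parameter values and the replacement scheme are assumptions:

    ```python
    import random

    def hybrid_pso_ea(fitness, dim=2, swarm=20, iters=100,
                      w=0.7, c1=1.5, c2=1.5, seed=0):
        """Hybrid PSO-EA sketch: standard PSO velocity updates, plus an
        EA-style step where the better-performing half of the swarm breeds
        offspring that replace the poorly performing half."""
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
        vel = [[0.0] * dim for _ in range(swarm)]
        pbest = [p[:] for p in pos]
        gbest = min(pos, key=fitness)[:]
        for _ in range(iters):
            for i in range(swarm):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if fitness(pos[i]) < fitness(pbest[i]):
                    pbest[i] = pos[i][:]
                if fitness(pos[i]) < fitness(gbest):
                    gbest = pos[i][:]
            # EA step: fitter individuals produce offspring (average
            # crossover + Gaussian mutation) that replace the worst half.
            order = sorted(range(swarm), key=lambda i: fitness(pos[i]))
            half = swarm // 2
            for k in range(half, swarm):
                a, b = rng.sample(order[:half], 2)
                pos[order[k]] = [(pos[a][d] + pos[b][d]) / 2 + rng.gauss(0, 0.1)
                                 for d in range(dim)]
                vel[order[k]] = [0.0] * dim
        return gbest

    sphere = lambda x: sum(v * v for v in x)   # toy objective, minimum at 0
    best = hybrid_pso_ea(sphere)
    ```

    In the paper's setting, the fitness function would evaluate an SRN candidate on the classification task rather than a closed-form objective.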

    Gene Expression Data for DLBCL Cancer Survival Prediction with a Combination of Machine Learning Technologies

    Get PDF
    Gene expression profiles have become an important and promising way to approach cancer prognosis and treatment. In addition to their application in cancer class prediction and discovery, gene expression data can be used for the prediction of patient survival. Here, we use particle swarm optimization (PSO) to address one of the major challenges in gene expression data analysis, the curse of dimensionality, in order to discriminate high-risk patients from low-risk patients. A discrete binary version of PSO is used for gene selection and dimensionality reduction, and a probabilistic neural network (PNN) is implemented as the classifier. The experimental results on the diffuse large B-cell lymphoma data set demonstrate the effectiveness of the PSO/PNN system in survival prediction.
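    In the discrete binary PSO variant, velocities pass through a sigmoid to give the probability of each bit being 1; here each bit flags whether a gene is selected. The sketch below uses a toy objective in place of a real classifier-based fitness, and all parameters are illustrative:

    ```python
    import math
    import random

    def binary_pso(fitness, n_bits, swarm=15, iters=60,
                   w=0.8, c1=1.5, c2=1.5, seed=1):
        """Discrete binary PSO sketch: each particle is a bit string,
        and sigmoid(velocity) is the probability that a bit is set."""
        rng = random.Random(seed)
        pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(swarm)]
        vel = [[0.0] * n_bits for _ in range(swarm)]
        pbest = [p[:] for p in pos]
        gbest = min(pos, key=fitness)[:]
        for _ in range(iters):
            for i in range(swarm):
                for d in range(n_bits):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                    pos[i][d] = 1 if rng.random() < prob else 0
                if fitness(pos[i]) < fitness(pbest[i]):
                    pbest[i] = pos[i][:]
                if fitness(pos[i]) < fitness(gbest):
                    gbest = pos[i][:]
        return gbest

    # Toy stand-in for a gene-selection objective: recover a known mask of
    # "informative genes". A real fitness would be classifier error (e.g.,
    # from the PNN) plus a penalty on the number of selected genes.
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    fit = lambda bits: sum(a != b for a, b in zip(bits, target))
    best = binary_pso(fit, n_bits=len(target))
    ```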

    Improving Local Search for Structured SAT Formulas via Unit Propagation Based Construct and Cut Initialization (Short Paper)

    Get PDF
    This work is dedicated to improving local search solvers for the Boolean satisfiability (SAT) problem on structured instances. We propose a construct-and-cut (CnC) algorithm based on unit propagation, which is used to produce initial assignments for local search. We integrate our CnC initialization procedure into several state-of-the-art local search SAT solvers to obtain improved variants. Experiments are carried out on a benchmark encoded from a spectrum repacking project, as well as benchmarks encoded from two important mathematical problems, namely Boolean Pythagorean Triples and Schur Number Five. The experiments show that the CnC initialization improves the local search solvers, leading to better performance than state-of-the-art solvers based on conflict-driven clause learning (CDCL).
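    The unit-propagation core that such an initialization builds on can be sketched as follows; this is a textbook fixpoint loop, not the paper's CnC procedure, and the example formula is illustrative:

    ```python
    def unit_propagate(clauses, assignment=None):
        """Unit propagation to a fixpoint: whenever a clause has all but
        one literal falsified, the remaining literal is forced. Literals
        are nonzero ints in DIMACS style (-3 means "variable 3 is false").
        Returns (assignment, conflict_flag)."""
        assignment = dict(assignment or {})  # var -> bool
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                unassigned, satisfied = [], False
                for lit in clause:
                    var, want = abs(lit), lit > 0
                    if var in assignment:
                        if assignment[var] == want:
                            satisfied = True
                            break
                    else:
                        unassigned.append(lit)
                if satisfied:
                    continue
                if not unassigned:
                    return assignment, True      # clause falsified: conflict
                if len(unassigned) == 1:         # unit clause: forced literal
                    lit = unassigned[0]
                    assignment[abs(lit)] = lit > 0
                    changed = True
        return assignment, False

    # (x1) AND (not x1 OR x2) AND (not x2 OR x3): propagation forces all true.
    cnf = [[1], [-1, 2], [-2, 3]]
    assign, conflict = unit_propagate(cnf)
    # assign == {1: True, 2: True, 3: True}, conflict == False
    ```

    A construct-and-cut initializer would interleave such propagation with heuristic decisions, backing off ("cutting") when a conflict is detected, until a full initial assignment for local search is produced.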

    Training Winner-Take-All Simultaneous Recurrent Neural Networks

    Get PDF
    The winner-take-all (WTA) network is useful in database management, very large scale integration (VLSI) design, and digital processing. The synthesis procedure for WTA on a single-layer, fully connected architecture with a sigmoid transfer function is still not fully explored. We discuss the use of simultaneous recurrent networks (SRNs) trained by Kalman filter algorithms for the task of finding the maximum among N numbers. The simulation demonstrates the effectiveness of our training approach under conditions of a shared-weight SRN architecture. A more general SRN also succeeds in solving a real classification application on car engine data.
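    The WTA behavior itself (finding the maximum among N numbers) can be illustrated with a classic lateral-inhibition iteration; this is a generic dynamics sketch, not the authors' Kalman-trained SRN, and the step size is an assumption:

    ```python
    def winner_take_all(values, eps=0.05, steps=200):
        """Minimal WTA dynamics sketch: each unit is suppressed in
        proportion to the total activity of its rivals, so after settling
        only the unit with the largest initial input remains active."""
        x = [float(v) for v in values]
        for _ in range(steps):
            total = sum(x)
            x = [max(0.0, xi - eps * (total - xi)) for xi in x]
        return x

    activity = winner_take_all([0.2, 0.9, 0.5, 0.7])
    winner = activity.index(max(activity))
    # winner == 1: only the unit that started largest stays positive
    ```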

    Time Series Prediction with Recurrent Neural Networks using a Hybrid PSO-EA Algorithm

    Get PDF
    To predict the 100 missing values of the 5000-point time series given for the IJCNN 2004 time series prediction competition, we applied an architecture that automates the design of recurrent neural networks using a new evolutionary learning algorithm. This new algorithm is based on a hybrid of particle swarm optimization (PSO) and an evolutionary algorithm (EA). By combining the searching abilities of these two global optimization methods, the evolution of individuals is no longer restricted to a single generation, and better-performing individuals may produce offspring to replace those with poor performance. The novel algorithm is then applied to a recurrent neural network for the time series prediction. The experimental results show that our approach gives good performance in predicting the missing values of the time series.

    A Statistical Solution to a Text Decoding Challenge Problem

    Get PDF
    Given an encoded unknown text message in the form of a three-dimensional spatial series generated by the use of four smooth nonlinear functions, we use a method based on simple statistical reasoning to select samples for rebuilding the four functions. The estimated functions are then used to decode the sequence. The experimental results show that our method gives nearly perfect decoding, enabling us to submit a 100% accurate solution to the IJCNN challenge problem.

    S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields

    Full text link
    Recently, Neural Radiance Field (NeRF) has shown great success in rendering novel-view images of a given scene by learning an implicit representation from only posed RGB images. NeRF and related neural field methods (e.g., neural surface representation) typically optimize a point-wise loss and make point-wise predictions, where one data point corresponds to one pixel. Unfortunately, this line of research fails to use the collective supervision of distant pixels, although it is known that pixels in an image or scene can provide rich structural information. To the best of our knowledge, we are the first to design a nonlocal multiplex training paradigm for NeRF and related neural field methods, via a novel Stochastic Structural SIMilarity (S3IM) loss that processes multiple data points as a whole set instead of processing multiple inputs independently. Our extensive experiments demonstrate the unreasonable effectiveness of S3IM in improving NeRF and neural surface representation nearly for free. The improvements in quality metrics can be particularly significant for relatively difficult tasks: e.g., the test MSE loss unexpectedly drops by more than 90% for TensoRF and DVGO over eight novel view synthesis tasks; NeuS obtains a 198% F-score gain and a 64% Chamfer L1 distance reduction over eight surface reconstruction tasks. Moreover, S3IM is consistently robust even with sparse inputs, corrupted images, and dynamic scenes. Comment: ICCV 2023 main conference. Code: https://github.com/Madaoer/S3IM
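    The core idea, grouping randomly sampled rays into stochastic "patches" and scoring them with a structural similarity term, can be sketched as below. This is a simplified illustration using per-patch global statistics; the paper's S3IM applies the standard windowed SSIM kernel, and all parameter values here are assumptions:

    ```python
    import random

    def s3im_sketch(pred, gt, patch=64, groups=8, seed=0, c1=1e-4, c2=9e-4):
        """Sketch of a stochastic structural-similarity loss: randomly
        group the (unordered) batch of predicted and ground-truth pixel
        values into pseudo-patches, score each with a simplified SSIM
        over patch statistics, and return 1 - mean SSIM."""
        rng = random.Random(seed)
        n = len(pred)
        losses = []
        for _ in range(groups):
            idx = rng.sample(range(n), patch)
            x = [pred[i] for i in idx]
            y = [gt[i] for i in idx]
            mx, my = sum(x) / patch, sum(y) / patch
            vx = sum((v - mx) ** 2 for v in x) / patch
            vy = sum((v - my) ** 2 for v in y) / patch
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / patch
            ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / (
                (mx * mx + my * my + c1) * (vx + vy + c2))
            losses.append(1.0 - ssim)
        return sum(losses) / groups

    rng = random.Random(1)
    pixels = [rng.random() for _ in range(1024)]
    loss_same = s3im_sketch(pixels, pixels)              # identical -> 0
    loss_diff = s3im_sketch(pixels, [1 - p for p in pixels])  # mismatched
    ```

    Because the grouping is random, collectively supervising distant pixels costs little beyond the SSIM evaluation itself, which is why the paper describes the improvement as coming "nearly for free".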

    Advanced architecture and training algorithms for recurrent neural networks

    No full text
    Recurrent neural networks (RNNs) attract considerable interest in computational intelligence because of their superior power in processing spatio-temporal data and time-varying signals. Traditionally, the recurrency of a neural network occurs between input samples along the time axis. The simultaneous recurrent network (SRN) extends the recurrent property to the spatial dimension. Presenting the feedback information to the network together with the same input vector reveals the transient properties of the system, which helps to trace error propagation and facilitates training. Backpropagation through time and the extended Kalman filter are suitable gradient-based training algorithms for RNNs. --Abstract, page iv